We evaluate two popular local explainability techniques, LIME and SHAP, on a movie recommendation task. We find that the behavior of these two methods depends on the sparsity of the dataset. LIME performs better than SHAP in dense segments of the dataset, whereas SHAP performs better in sparse segments. We trace this difference to the different bias-variance characteristics of the estimators underlying LIME and SHAP. We find that, compared to LIME, SHAP exhibits lower variance in sparse segments of the data, and we attribute this lower variance to the completeness constraint property that is present in SHAP but missing in LIME. This constraint acts as a regularizer: it increases the bias of the SHAP estimator but decreases its variance, leading to a favorable bias-variance trade-off, especially in high-sparsity data settings. Building on this insight, we introduce the same constraint into LIME and formulate a novel local explainability framework called Completeness-Constrained LIME (CLIMB), which outperforms LIME and is faster than SHAP.
Translated by Google Translate
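The completeness constraint discussed above requires the attributions to sum to a fixed total (e.g., the prediction minus a baseline value). A minimal sketch of how such a constraint can be imposed on a local linear surrogate fit, assuming a simple KKT-system solver (the function name and toy data are illustrative, not the paper's implementation):

```python
import numpy as np

def completeness_constrained_fit(X, y, total):
    """Least-squares fit of y ~ X @ w subject to sum(w) == total.

    Solves the KKT system of
        min_w ||X w - y||^2  s.t.  1^T w = total,
    i.e. a completeness-style constraint: the attribution weights
    must sum to a fixed value (prediction minus baseline).
    """
    n, d = X.shape
    A = np.zeros((d + 1, d + 1))
    A[:d, :d] = 2.0 * X.T @ X   # Hessian block of the least-squares objective
    A[:d, d] = 1.0              # constraint gradient (column)
    A[d, :d] = 1.0              # constraint row: 1^T w = total
    b = np.concatenate([2.0 * X.T @ y, [total]])
    sol = np.linalg.solve(A, b)
    return sol[:d]  # attribution weights (last entry is the multiplier)

# toy surrogate-fitting example on perturbed samples
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, -0.5]) + 0.01 * rng.normal(size=50)
w = completeness_constrained_fit(X, y, total=2.5)
print(w, w.sum())  # weights sum to 2.5 exactly
```

Because the constraint is a single linear equality, it shrinks the feasible set and thereby trades a small amount of bias for reduced variance, which is the effect the abstract describes.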
Practically all planning research is limited to states represented in terms of Boolean and numeric state variables. Many practical problems, for example planning inside complex software systems, require far more complex data types, and even real-world planning often requires concepts such as sets of objects, which are not convenient to express in modeling languages with scalar types only. In this work, we investigate a modeling language for complex software systems that supports complex data types such as sets, arrays, records, and unions. We give a reduction of a broad range of complex data types and their operations to Boolean logic, and then map this representation further to PDDL for use with domain-independent PDDL planners. We evaluate the practicality of this approach and provide solutions to some of the issues that arise in the PDDL translation.
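A standard way to reduce a bounded set type to Boolean state variables, in the spirit of the reduction the abstract describes, is to keep one membership bit per potential element; set operations then become pointwise Boolean formulas. A small illustrative sketch (the class and method names are ours, not the paper's):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BoolSet:
    """A set over a fixed finite universe, encoded as one Boolean
    membership variable per element -- the kind of encoding used when
    compiling set-typed state variables down to Boolean logic."""
    universe: tuple
    bits: tuple  # bits[i] is True iff universe[i] is a member

    @classmethod
    def from_elems(cls, universe, elems):
        members = set(elems)
        return cls(tuple(universe), tuple(e in members for e in universe))

    def union(self, other):
        # elementwise OR of membership bits
        return BoolSet(self.universe,
                       tuple(a or b for a, b in zip(self.bits, other.bits)))

    def intersection(self, other):
        # elementwise AND of membership bits
        return BoolSet(self.universe,
                       tuple(a and b for a, b in zip(self.bits, other.bits)))

    def contains(self, elem):
        return self.bits[self.universe.index(elem)]

U = ["a", "b", "c"]
s = BoolSet.from_elems(U, ["a"]).union(BoolSet.from_elems(U, ["b"]))
print(s.contains("a"), s.contains("c"))  # True False
```

Each membership bit maps naturally to a Boolean predicate in PDDL, which is what makes this kind of reduction compatible with domain-independent planners.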
Pruning refers to the elimination of trivial weights from neural networks. The sub-networks produced by pruning an overparameterized model are often called lottery tickets. This research aims to generate winning lottery tickets, i.e., tickets from a set of candidates that can achieve accuracy similar to that of the original unpruned network. We introduce a novel winning ticket called the Cyclic Overlapping Lottery Ticket (COLT), obtained by data splitting and cyclic retraining of the pruned network from scratch. We apply a cyclic pruning algorithm that keeps only the overlapping weights of different pruned models trained on different data segments. Our results demonstrate that COLT can achieve accuracies similar to those of the unpruned model while maintaining high sparsity. We show that the accuracy of COLT is on par with the winning tickets of the Lottery Ticket Hypothesis (LTH) and, at times, better. Moreover, COLTs can be generated using fewer iterations than tickets produced by the popular Iterative Magnitude Pruning (IMP) method. In addition, we observe that COLTs generated on large datasets can be transferred to small ones without compromising performance, demonstrating their generalization capability. We conduct all our experiments on the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets and report performance superior to state-of-the-art methods.
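The core mask operation described above, keeping only the weights that survive magnitude pruning in models trained on different data splits, can be sketched as follows (a toy NumPy illustration under our own naming, not the paper's code):

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Boolean mask keeping the largest-magnitude (1 - sparsity)
    fraction of the weights, as in magnitude pruning."""
    k = int(round(weights.size * (1.0 - sparsity)))
    thresh = np.sort(np.abs(weights).ravel())[-k]  # k-th largest magnitude
    return np.abs(weights) >= thresh

def overlapping_ticket(weights_a, weights_b, sparsity):
    """Keep only weights that survive magnitude pruning in BOTH models,
    i.e. the overlap that defines a COLT-style ticket."""
    return magnitude_mask(weights_a, sparsity) & magnitude_mask(weights_b, sparsity)

# stand-ins for one layer's weights after training on two data splits
rng = np.random.default_rng(1)
w_split1 = rng.normal(size=(8, 8))
w_split2 = w_split1 + 0.1 * rng.normal(size=(8, 8))  # correlated, as in practice
mask = overlapping_ticket(w_split1, w_split2, sparsity=0.5)
print(mask.mean())  # fraction retained; at most 0.5 by construction
```

Because the intersection can only shrink the set of retained weights, iterating this step cyclically over data segments drives sparsity up, which is why COLT reaches high sparsity in few iterations.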
In many real-world applications of combinatorial bandits, such as content caching, rewards must be maximized while satisfying minimum service requirements. Moreover, base arm availabilities vary over time, and the actions taken need to adapt to the situation to maximize rewards. We propose a new bandit model, called Contextual Combinatorial Volatile Bandits with Group Thresholds, to address these challenges. Our model subsumes combinatorial bandits by considering super arms to be subsets of groups of base arms. We seek to maximize super arm rewards while satisfying the thresholds of all base arm groups that constitute a super arm. To this end, we define a new notion of regret that merges super arm reward maximization with group reward satisfaction. To facilitate learning, we assume that the mean outcomes of base arms are samples from a Gaussian Process indexed by the context set, and that the expected reward is Lipschitz continuous in the expected base arm outcomes. We propose an algorithm, called Thresholded Combinatorial Gaussian Process Upper Confidence Bounds (TCGP-UCB), that balances maximizing cumulative reward against satisfying group reward thresholds, and prove that it incurs $\tilde{O}(K\sqrt{T\overline{\gamma}_{T}})$ regret with high probability, where $\overline{\gamma}_{T}$ is the maximum information gain associated with the set of base arm contexts that appear in the first $T$ rounds and $K$ is the maximum super arm cardinality of any feasible action over all rounds. We show in experiments that our algorithm accumulates rewards comparable with those of the state-of-the-art combinatorial bandit algorithm while picking actions whose groups satisfy their thresholds.
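The optimism principle behind a GP-UCB-style index, which TCGP-UCB builds on, can be sketched in a few lines: compute the GP posterior over base arm outcomes given past context/outcome pairs, score each available base arm by mean plus scaled standard deviation, and greedily assemble a super arm. This is a minimal illustration under our own simplifications (1-D contexts, RBF kernel, a plain top-$K$ selection with no group thresholds), not the paper's algorithm:

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """RBF kernel between 1-D context arrays, prior variance 1."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-3):
    """GP posterior mean and std at the query contexts."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(np.clip(var, 0.0, None))

def ucb_indices(mean, std, beta=2.0):
    """Optimistic index mu + sqrt(beta) * sigma for each base arm."""
    return mean + np.sqrt(beta) * std

# contexts of base arms observed so far, and their noisy outcomes
x_train = np.array([0.1, 0.4, 0.8])
y_train = np.array([0.2, 0.9, 0.3])
x_avail = np.array([0.05, 0.45, 0.95])  # base arm contexts available this round
mu, sigma = gp_posterior(x_train, y_train, x_avail)
idx = ucb_indices(mu, sigma)
# greedily form a super arm of cardinality K = 2 from the top indices
K = 2
super_arm = np.argsort(idx)[-K:]
print(sorted(super_arm.tolist()))
```

The full algorithm additionally has to respect the group thresholds when composing the super arm, which is where the new regret notion comes in.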
We show that Domain Invariant Feature Learning (DIFL) can improve the out-of-domain generalizability of deep learning tuberculosis screening algorithms. It is well known that state-of-the-art deep learning algorithms often have difficulty generalizing to unseen data distributions due to "domain shift". In the context of medical imaging, this can lead to unintended biases, such as the inability to generalize from one patient population to another. We analyze the performance of a ResNet-50 classifier for the purpose of tuberculosis screening using the four most popular public datasets with geographically diverse sources of imagery. We show that without domain adaptation, ResNet-50 has difficulty generalizing between imaging distributions from a number of public tuberculosis screening datasets with imagery from geographically distributed regions. However, with the incorporation of DIFL, the out-of-domain performance is greatly improved. Analysis criteria include comparing accuracy, sensitivity, specificity, and AUC of both the baseline and the DIFL-enhanced algorithms. We conclude that DIFL improves the generalizability of tuberculosis screening across a variety of public datasets while maintaining acceptable accuracy on source domain imagery.
We consider the problem of optimizing a vector-valued objective function $\boldsymbol{f}$ sampled from a Gaussian Process (GP) whose index set is a well-behaved, compact metric space $({\cal X},d)$ of designs. We assume that $\boldsymbol{f}$ is not known beforehand and that evaluating $\boldsymbol{f}$ at design $x$ results in a noisy observation of $\boldsymbol{f}(x)$. Since identifying the Pareto optimal designs via exhaustive search is infeasible when the cardinality of ${\cal X}$ is large, we propose an algorithm, called Adaptive $\boldsymbol{\epsilon}$-PAL, that exploits the smoothness of the GP-sampled function and the structure of $({\cal X},d)$ to learn fast. In essence, Adaptive $\boldsymbol{\epsilon}$-PAL employs a tree-based adaptive discretization technique to identify an $\boldsymbol{\epsilon}$-accurate Pareto set of designs in as few evaluations as possible. We provide information-type and metric dimension-type bounds for $\boldsymbol{\epsilon}$-accurate Pareto set identification. We also show experimentally that our algorithm outperforms other Pareto set identification methods on several benchmark datasets.
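The notion of an $\epsilon$-accurate Pareto set used above can be made concrete with a small sketch: a design is $\epsilon$-dominated if some other design's objective vector, inflated by $\epsilon$, is at least as good in every objective, and an $\epsilon$-accurate Pareto set keeps the designs that are not $\epsilon$-dominated. This is an illustrative brute-force check on a finite design set (function names are ours; the paper's algorithm avoids exhaustive comparison via adaptive discretization):

```python
import numpy as np

def eps_dominates(u, v, eps):
    """True if objective vector u + eps weakly dominates v in every
    objective and strictly in at least one (maximization convention)."""
    u, v, eps = map(np.asarray, (u, v, eps))
    return np.all(u + eps >= v) and np.any(u + eps > v)

def eps_pareto_set(points, eps):
    """Indices of designs not eps-dominated by any other design --
    an eps-accurate Pareto set over a finite set of objective vectors."""
    keep = []
    for i, p in enumerate(points):
        if not any(eps_dominates(q, p, eps)
                   for j, q in enumerate(points) if j != i):
            keep.append(i)
    return keep

# toy two-objective design evaluations
designs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.9], [0.2, 0.2]])
print(eps_pareto_set(designs, eps=[0.05, 0.05]))  # → [0, 1, 2]
```

Larger $\epsilon$ prunes more near-dominated designs, which is the accuracy/evaluation-budget trade-off the algorithm exploits.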